[Review Column] Thinking About the Limits of Usable Unlabeled Data
In scientific research, sound methodology calls for "seeing the forest before the trees." Academic research in artificial intelligence is flourishing and the technology is advancing rapidly, with new developments emerging every day. For AI practitioners, systematically mapping the landscape of this vast forest of knowledge is the only way to grasp where the field is heading. To that end, we curate outstanding review articles from China and abroad in this "Review Column." Stay tuned.